
[Notes] Chapter 3. Discrete Random Variables and Probability Distributions

Posted by haifeng on 2020-03-16 16:35:35 last update 2020-03-16 16:58:45 | Answers (0)


Introduction:

Two types of random variables:

  • discrete random variables
  • continuous random variables

 

[Def] For a given sample space $\mathcal{S}$ of some experiment, a random variable is any rule that associates a number with each outcome in $\mathcal{S}$.

$X(s)=x$ means that $x$ is the value associated with the outcome $s$ by the rv $X$.

Here we use the abbreviation "rv" to stand for random variable. 

 

[Def] Any random variable whose only possible values are 0 and 1 is called a Bernoulli random variable. (伯努利随机变量)

 

[Def] A discrete random variable is an rv whose possible values constitute a countable set.

[Def] A random variable is continuous if its set of possible values consists of an entire interval on the number line.

 


Section 3.2 Probability Distributions for Discrete Random Variables (离散型随机变量的概率分布)

The probability distribution of $X$ says how the total probability of 1 is distributed among (allocated to) the various possible $X$ values.

 

[Def] The probability distribution or probability mass function (pmf) of a discrete rv is defined for every number $x$ by $p(x)=P(X=x)=P(\text{all } s\in\mathcal{S}:\ X(s)=x)$.

Here $P(X=x)$ is read "the probability that the rv $X$ assumes the value $x$".

In words, for every possible value $x$ of the random variable, the pmf specifies the probability of observing that value when the experiment is performed. The conditions $p(x)\geqslant 0$ and $\sum_{\text{all possible } x}p(x)=1$ are required of any pmf.
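
A minimal Python sketch (not part of the original notes, with made-up probabilities) of how a pmf with finitely many possible values can be stored as a dictionary and the two conditions checked directly:

import math

# Hypothetical pmf {value: probability}; the numbers are made up for illustration.
pmf = {0: 0.25, 1: 0.50, 2: 0.25}

# The two conditions required of any pmf.
assert all(prob >= 0 for prob in pmf.values())      # p(x) >= 0
assert math.isclose(sum(pmf.values()), 1.0)         # sum over all possible x is 1

def p(x):
    """Return p(x) = P(X = x); values outside the support get probability 0."""
    return pmf.get(x, 0.0)

print(p(1))   # 0.5
print(p(7))   # 0.0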

 

[Def] Suppose $p(x)$ depends on a quantity that can be assigned any one of a number of possible values, with each different value determining a different probability distribution. Such a quantity is called a parameter of the distribution. 

The collection of all probability distributions for different values of the parameter is called a family of probability distributions.
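
For example, if $X$ is a Bernoulli rv and we write $\alpha=P(X=1)$ with $0<\alpha<1$, then its pmf can be written as

\[
p(x;\alpha)=
\begin{cases}
1-\alpha, & x=0,\\
\alpha, & x=1,\\
0, & \text{otherwise}.
\end{cases}
\]

Here $\alpha$ is a parameter; each value of $\alpha$ in $(0,1)$ determines a different Bernoulli distribution, and the collection of all of them is the family of Bernoulli distributions.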

 

The Cumulative Distribution Function (CDF) (累积分布函数)

For some fixed value $x$, we often wish to compute the probability that the observed value of $X$ will be at most $x$.

When $X$ is a discrete random variable and $x$ is a possible value of $X$, we have

\[
P(X < x) < P(X\leqslant x),
\]

since the possible value $x$ itself carries positive probability $p(x)$.

 

[Def] The cumulative distribution function (cdf) $F(x)$ of a discrete rv $X$ with pmf $p(x)$ is defined for every number $x$ by

\[
F(x)=P(X\leqslant x)=\sum_{y:\ y\leqslant x}p(y)
\]

For any number $x$, $F(x)$ is the probability that the observed value of $X$ will be at most $x$.
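
A short Python sketch (again not from the notes, reusing the hypothetical pmf above) of computing the cdf by summing $p(y)$ over the possible values $y\leqslant x$:

# Hypothetical pmf from the earlier sketch.
pmf = {0: 0.25, 1: 0.50, 2: 0.25}

def F(x):
    """cdf: F(x) = P(X <= x) = sum of p(y) over possible values y <= x."""
    return sum(prob for y, prob in pmf.items() if y <= x)

print(F(-1))   # 0     (no possible value is <= -1)
print(F(0.5))  # 0.25  (F is a step function, constant between possible values)
print(F(2))    # 1.0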

 

The cdf has been derived from the pmf. It is possible to reverse this procedure and obtain the pmf from the cdf whenever the latter function is available.

[Prop] For any two numbers $a$ and $b$ with $a\leqslant b$, 

\[
P(a\leqslant X\leqslant b)=F(b)-F(a-)
\]

where "$a-$" represents the largest possible $X$ value that is strictly less than $a$.

In particular, if the only possible values are integers, and if $a$ and $b$ are integers, then

\[
\begin{split}
P(a\leqslant X\leqslant b)&=P(X=a \text{ or } a+1 \text{ or } \cdots \text{ or } b)\\
&=F(b)-F(a-1)
\end{split}
\]

Taking $b=a$ yields

\[
P(X=a)=F(a)-F(a-1)
\]

in this case.
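
Continuing the same hypothetical integer-valued example, a small Python check (not from the notes) of the proposition $P(a\leqslant X\leqslant b)=F(b)-F(a-1)$ and of recovering the pmf via $p(x)=F(x)-F(x-1)$:

pmf = {0: 0.25, 1: 0.50, 2: 0.25}   # hypothetical integer-valued pmf, as before

def F(x):
    # cdf computed from the pmf, as in the sketch above
    return sum(prob for y, prob in pmf.items() if y <= x)

def prob_between(a, b):
    # P(a <= X <= b) computed directly from the pmf
    return sum(prob for y, prob in pmf.items() if a <= y <= b)

a, b = 1, 2
assert abs(prob_between(a, b) - (F(b) - F(a - 1))) < 1e-12   # F(b) - F(a-)

for x in pmf:
    assert abs(pmf[x] - (F(x) - F(x - 1))) < 1e-12           # p(x) = F(x) - F(x-1)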

 

Section 3.3 Expected Values of Discrete Random Variables (离散型随机变量的期望)

[Def] Let $X$ be a discrete rv with set of possible values $D$ and pmf $p(x)$. The expected value or mean value of $X$, denoted by $E(X)$ or $\mu_X$, is

\[
E(X)=\mu_X=\sum_{x\in D}x\cdot p(x)
\]

When it is clear to which $X$ the expected value refers, $\mu$ rather than $\mu_X$ is often used.
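
One more Python sketch with the same made-up pmf: the expected value is just the probability-weighted sum of the possible values.

pmf = {0: 0.25, 1: 0.50, 2: 0.25}   # hypothetical pmf, as before

def expected_value(pmf):
    """E(X) = sum over x in D of x * p(x)."""
    return sum(x * prob for x, prob in pmf.items())

print(expected_value(pmf))   # 1.0 for this made-up pmf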

 


Remark:

These notes are copied from the following book:

Jay L. Devore, Probability and Statistics for Engineering and the Sciences (Fifth Edition).